Multi-Agent Reinforcement Learning is a Sequence Modeling Problem
Large sequence models (SMs) such as the GPT series and BERT have displayed
outstanding performance and generalization capabilities on vision, language,
and, more recently, reinforcement learning tasks. A natural follow-up question is how
to abstract multi-agent decision making into an SM problem and benefit from the
prosperous development of SMs. In this paper, we introduce a novel architecture
named Multi-Agent Transformer (MAT) that effectively casts cooperative
multi-agent reinforcement learning (MARL) into an SM problem, wherein the task
is to map the agents' observation sequence to their optimal action sequence. Our
goal is to build the bridge between MARL and SMs so that the modeling power of
modern sequence models can be unleashed for MARL. Central to our MAT is an
encoder-decoder architecture which leverages the multi-agent advantage
decomposition theorem to transform the joint policy search problem into a
sequential decision-making process; this yields time complexity that is only
linear in the number of agents and, most importantly, endows MAT with a monotonic
performance improvement guarantee. Unlike prior works such as Decision
Transformer, which fits only pre-collected offline data, MAT is trained by online
trial and error in the environment in an on-policy fashion. To validate
MAT, we conduct extensive experiments on StarCraft II, Multi-Agent MuJoCo,
Dexterous Hands Manipulation, and Google Research Football benchmarks. Results
demonstrate that MAT achieves superior performance and data efficiency compared
to strong baselines including MAPPO and HAPPO. Furthermore, we demonstrate that
MAT is an excellent few-shot learner on unseen tasks regardless of changes in
the number of agents. See our project page at
https://sites.google.com/view/multi-agent-transformer
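The auto-regressive decoding that the MAT abstract describes, where an encoder embeds all agents' observations and a decoder then emits one action per agent conditioned on the actions of preceding agents, can be sketched in a few lines. The PyTorch snippet below is a minimal illustration under assumed hyper-parameters; the class name MATSketch, the layer sizes, and the discrete-action head are hypothetical choices, not the authors' implementation.

    # Minimal sketch of MAT-style sequential action generation (illustrative only).
    import torch
    import torch.nn as nn

    class MATSketch(nn.Module):
        def __init__(self, obs_dim, act_dim, d_model=64, n_heads=4):
            super().__init__()
            self.act_dim = act_dim
            self.obs_embed = nn.Linear(obs_dim, d_model)
            self.act_embed = nn.Linear(act_dim, d_model)
            enc = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
            dec = nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True)
            self.encoder = nn.TransformerEncoder(enc, num_layers=2)
            self.decoder = nn.TransformerDecoder(dec, num_layers=2)
            self.act_head = nn.Linear(d_model, act_dim)  # logits over discrete actions

        @torch.no_grad()
        def act(self, obs):                        # obs: (batch, n_agents, obs_dim)
            batch, n_agents, _ = obs.shape
            memory = self.encoder(self.obs_embed(obs))    # joint observation encoding
            prev = obs.new_zeros(batch, 1, self.act_dim)  # start token before agent 0
            actions = []
            for _ in range(n_agents):
                h = self.decoder(self.act_embed(prev), memory)
                logits = self.act_head(h[:, -1])          # current agent's logits
                a = torch.distributions.Categorical(logits=logits).sample()
                actions.append(a)
                one_hot = nn.functional.one_hot(a, self.act_dim).float()
                prev = torch.cat([prev, one_hot.unsqueeze(1)], dim=1)
            return torch.stack(actions, dim=1)            # (batch, n_agents)

    acts = MATSketch(obs_dim=8, act_dim=5).act(torch.randn(2, 3, 8))  # 2 envs, 3 agents

Because each agent conditions on the actions already chosen, the joint action is produced in a single pass of n sequential steps rather than by searching the exponential joint action space.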
Offline Pre-trained Multi-Agent Decision Transformer: One Big Sequence Model Tackles All SMAC Tasks
Offline reinforcement learning leverages previously collected offline
datasets to learn optimal policies without requiring access to the real
environment. Such a paradigm is also desirable for multi-agent reinforcement
learning (MARL) tasks, given the increased interactions among agents and with
the environment. Yet, in MARL, the paradigm of offline pre-training with online
fine-tuning has not been studied, nor are datasets or benchmarks for offline
MARL research available. In this paper, we facilitate such research by providing
large-scale datasets, and use them to examine the usage of the Decision
Transformer in the context of MARL. We investigate the generalisation of MARL
offline pre-training in the following three aspects: 1) between single-agent
and multi-agent settings, 2) from offline pre-training to online fine-tuning,
and 3) to multiple downstream tasks with few-shot and zero-shot capabilities.
We start by introducing the first offline MARL dataset with diverse quality
levels, based on the StarCraft II environment, and then propose
the novel architecture of multi-agent decision transformer (MADT) for effective
offline learning. MADT leverages the Transformer's sequence-modelling ability
and integrates it seamlessly with both offline and online MARL tasks.
A crucial benefit of MADT is that it learns generalisable policies that can
transfer between different types of agents under different task scenarios. On
the StarCraft II offline dataset, MADT outperforms state-of-the-art offline RL
baselines. When applied to online tasks, the pre-trained MADT significantly
improves sample efficiency and enjoys strong performance in both few-shot and
zero-shot cases. To the best of our knowledge, this is the first work to study
and demonstrate the effectiveness of offline pre-trained models for improving
sample efficiency and generalisability in MARL.
Comment: 17 pages, 6 figures
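To make the offline pre-training setup concrete, the sketch below shows one decision-transformer-style update on a batch of logged trajectories: a causal transformer reads an observation sequence and is trained with cross-entropy to predict each recorded action. This is an assumed, simplified rendering, not MADT's released code; the class name OfflineSeqModel, all dimensions, and the omission of return-to-go conditioning are illustrative choices.

    # Sketch of one offline pre-training step on logged trajectories (illustrative).
    import torch
    import torch.nn as nn

    class OfflineSeqModel(nn.Module):
        def __init__(self, obs_dim, act_dim, d_model=64, n_heads=4, max_len=256):
            super().__init__()
            self.obs_embed = nn.Linear(obs_dim, d_model)
            self.pos_embed = nn.Embedding(max_len, d_model)
            layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
            self.backbone = nn.TransformerEncoder(layer, num_layers=2)
            self.head = nn.Linear(d_model, act_dim)

        def forward(self, obs_seq):                # obs_seq: (batch, T, obs_dim)
            T = obs_seq.size(1)
            x = self.obs_embed(obs_seq) + self.pos_embed(torch.arange(T))
            causal = torch.triu(torch.full((T, T), float("-inf")), diagonal=1)
            h = self.backbone(x, mask=causal)      # each step sees only the past
            return self.head(h)                    # (batch, T, act_dim) logits

    model = OfflineSeqModel(obs_dim=8, act_dim=5)
    opt = torch.optim.Adam(model.parameters(), lr=3e-4)
    obs = torch.randn(4, 32, 8)                    # 4 logged trajectories, 32 steps
    acts = torch.randint(0, 5, (4, 32))            # recorded discrete actions
    loss = nn.functional.cross_entropy(model(obs).reshape(-1, 5), acts.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()                                     # one supervised pre-training step

Because the pre-training objective is plain supervised sequence prediction, the same weights can later be fine-tuned online with an RL objective, which is the offline-to-online transfer the abstract studies.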